
    Real-time magnetic resonance imaging reveals distinct vocal tract configurations during spontaneous and volitional laughter

    A substantial body of acoustic and behavioural evidence points to the existence of two broad categories of laughter in humans: spontaneous laughter that is emotionally genuine and somewhat involuntary, and volitional laughter that is produced on demand. In this study, we tested the hypothesis that these are also physiologically distinct vocalisations by measuring and comparing them using real-time MRI (rtMRI) of the vocal tract. Following Ruch & Ekman (2001), we further predicted that spontaneous laughter should be relatively less speech-like (i.e. less articulate) than volitional laughter. We collected rtMRI data from five adult human participants during spontaneous laughter, volitional laughter, and spoken vowels. We report distinguishable vocal tract shapes during the vocalic portions of these three vocalisation types, with volitional laughs intermediate between spontaneous laughs and vowels. Inspection of local features within the vocal tract across the different vocalisation types offers some additional support for Ruch and Ekman’s (2001) predictions. We discuss our findings in light of a dual-pathway hypothesis for the neural control of human volitional and spontaneous vocal behaviours, identifying tongue shape and velum lowering as potential biomarkers of spontaneous laughter to be investigated in future research.

    A dual larynx motor networks hypothesis

    Humans are vocal modulators par excellence. This ability is supported in part by the dual representation of the laryngeal muscles in the motor cortex. Movement, however, is not the product of motor cortex alone but of a broader motor network. This network consists of brain regions that contain somatotopic maps paralleling the organisation in motor cortex. We therefore present a novel hypothesis that the dual laryngeal representation is repeated throughout the broader motor network. In support of this hypothesis, we review existing literature demonstrating the existence of network-wide somatotopy and present initial evidence for the hypothesis’ plausibility. Understanding how this uniquely human phenotype in motor cortex interacts with broader brain networks is an important step toward understanding how humans evolved the ability to speak. We further suggest that this system may provide a means to study how individual components of the nervous system evolved within the context of neuronal networks.

    Speech timing cues reveal deceptive speech in social deduction board games

    The faculty of language allows humans to state falsehoods in their choice of words. However, while what is said might easily uphold a lie, how it is said may reveal deception. Hence, some features of the voice that are difficult for liars to control may keep speech mostly, if not always, honest. Previous research has identified that speech timing and voice pitch cues can predict the truthfulness of speech, but this evidence has come primarily from laboratory experiments, which sacrifice ecological validity for experimental control. We obtained ecologically valid recordings of deceptive speech by observing natural utterances from players of a popular social deduction board game, in which players are assigned roles that induce either honest or dishonest interactions. When speakers chose to lie, they were prone to longer and more frequent pauses in their speech. This finding is in line with theoretical predictions that lying is more cognitively demanding. However, lying was not reliably associated with vocal pitch. This contradicts predictions that increased physiological arousal from lying might increase muscular tension in the larynx, but is consistent with human specialisations that grant Homo sapiens sapiens an unusual degree of control over the voice relative to other primates. The present study demonstrates the utility of social deduction board games as a means of making naturalistic observations of human behaviour from semi-structured social interactions.
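
    The pause finding above lends itself to a brief illustration. The sketch below shows one common way pause frequency and total pause duration could be quantified from a mono recording, by flagging runs of low-energy frames; the frame length, silence threshold, minimum pause duration, and function names are assumptions made for this example, not the analysis pipeline used in the study.

```python
# Illustrative energy-based pause detection for a mono speech recording.
# The frame length, silence threshold, and minimum pause duration are
# assumed values for this sketch, not parameters from the study.
import numpy as np

def pause_stats(signal, sr, frame_ms=25.0, silence_db=-40.0, min_pause_ms=200.0):
    """Return (number of pauses, total pause duration in seconds)."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-12    # frame-wise energy
    silent = 20 * np.log10(rms) < silence_db                # True = silent frame

    # Collect runs of consecutive silent frames and keep only those long
    # enough to count as pauses rather than ordinary articulatory gaps.
    runs, current = [], 0
    for is_silent in silent:
        if is_silent:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)

    min_frames = int(min_pause_ms / frame_ms)
    pause_frames = [r for r in runs if r >= min_frames]
    return len(pause_frames), sum(pause_frames) * frame_len / sr
```

    For example, pause_stats(audio, 16000) would return the pause count and total paused time for one utterance, which could then be compared between truthful and deceptive statements.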

    An open-source toolbox for measuring vocal tract shape from real-time magnetic resonance images

    Real-time magnetic resonance imaging (rtMRI) is a technique that provides high-contrast videographic data of human anatomy in motion. Applied to the vocal tract, it is a powerful method for capturing the dynamics of speech and other vocal behaviours by imaging structures internal to the mouth and throat. These images provide a means of studying the physiological basis for speech, singing, expressions of emotion, and swallowing that are otherwise not accessible for external observation. However, taking quantitative measurements from these images is notoriously difficult. We introduce a signal processing pipeline that produces outlines of the vocal tract from the lips to the larynx as a quantification of the dynamic morphology of the vocal tract. Our approach performs simple tissue classification, but constrained to a researcher-specified region of interest. This combination facilitates feature extraction while retaining the domain-specific expertise of a human analyst. We demonstrate that this pipeline generalises well across datasets covering behaviours such as speech, vocal size exaggeration, laughter, and whistling, and that it produces reliable outcomes across analysts, particularly among users with domain-specific expertise. With this article, we make this pipeline available for immediate use by the research community, and further suggest that it may contribute to the continued development of fully automated methods based on deep learning algorithms.
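
    As a rough illustration of ROI-constrained tissue classification, the sketch below thresholds pixel intensities within a researcher-specified mask for a single rtMRI frame. This is a minimal sketch of the general idea only; the function name, the median-based threshold rule, and the array shapes are assumptions and do not reflect the toolbox's actual API.

```python
# Minimal sketch of ROI-constrained tissue classification for one rtMRI frame.
# The median-intensity threshold and all names here are illustrative choices,
# not the toolbox's implementation.
import numpy as np

def classify_tissue(frame, roi_mask):
    """Label bright (tissue) vs. dark (airway) pixels, but only inside the ROI.

    frame    : 2-D array of pixel intensities from one rtMRI frame
    roi_mask : boolean array of the same shape; True inside the
               researcher-specified region of interest
    """
    threshold = np.median(frame[roi_mask])          # split point from ROI pixels only
    tissue = np.zeros_like(frame, dtype=bool)
    tissue[roi_mask] = frame[roi_mask] > threshold  # classify inside the ROI only
    return tissue

# Example with synthetic data; a hand-drawn ROI would replace the rectangle.
rng = np.random.default_rng(0)
frame = rng.normal(loc=100.0, scale=20.0, size=(84, 84))
roi = np.zeros((84, 84), dtype=bool)
roi[20:60, 30:70] = True
print(classify_tissue(frame, roi).sum(), "pixels labelled as tissue")
```

    Constraining the classification to the drawn region is what keeps the analyst's anatomical expertise in the loop: pixels outside the ROI are never labelled, so bright structures elsewhere in the image cannot leak into the vocal tract outline.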

    Evolution of the speech‐ready brain: The voice/jaw connection in the human motor cortex

    A prominent model of the origins of speech, known as the “frame/content” theory, posits that oscillatory lowering and raising of the jaw provided an evolutionary scaffold for the development of syllable structure in speech. Because such oscillations are nonvocal in most nonhuman primates, the evolution of speech required the addition of vocalization onto this scaffold in order to turn jaw oscillations into vocalized syllables. In the present functional MRI study, we demonstrate overlapping somatotopic representations of the larynx and the jaw muscles in the human primary motor cortex. This proximity between the larynx and jaw in the brain might support the coupling between vocalization and jaw oscillations needed to generate syllable structure. The model suggests that humans inherited voluntary control of jaw oscillations from ancestral species, but added voluntary control of vocalization onto this scaffold via the evolution of a new brain area that came to be situated near the jaw region in the human motor cortex.

    How does human motor cortex regulate vocal pitch in singers?

    Vocal pitch is used as an important communicative device by humans, as found in the melodic dimension of both speech and song. Vocal pitch is determined by the degree of tension in the vocal folds of the larynx, which itself is influenced by complex and nonlinear interactions among the laryngeal muscles. The relationship between these muscles and vocal pitch has been described by a mathematical model in the form of a set of ‘control rules’. We searched for the biological implementation of these control rules in the larynx motor cortex of the human brain. We scanned choral singers with functional magnetic resonance imaging as they produced discrete pitches at four different levels across their vocal range. While the locations of the larynx motor activations varied across singers, the activation peaks for the four pitch levels were highly consistent within each individual singer. This result was corroborated using multi-voxel pattern analysis, which demonstrated an absence of patterned activations differentiating any pairing of pitch levels. The complex and nonlinear relationships between the multiple laryngeal muscles that control vocal pitch may obscure the neural encoding of vocal pitch in the brain.
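
    To make the pairwise multi-voxel pattern analysis concrete, the toy sketch below runs a cross-validated linear classifier on every pairing of four pitch levels; chance-level accuracy, as reported above, would indicate no reliable patterned activation. The data here are synthetic noise, and the classifier choice, data shapes, and trial counts are illustrative assumptions rather than the study's exact analysis.

```python
# Toy pairwise MVPA: can a linear classifier separate two pitch levels from
# voxel patterns? The data are pure noise, so accuracy should hover near
# chance, mirroring the null result described above. Shapes are illustrative.
import numpy as np
from itertools import combinations
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 20, 150
levels = [1, 2, 3, 4]
patterns = {lv: rng.normal(size=(n_trials, n_voxels)) for lv in levels}

for a, b in combinations(levels, 2):
    X = np.vstack([patterns[a], patterns[b]])
    y = np.array([0] * n_trials + [1] * n_trials)
    acc = cross_val_score(SVC(kernel="linear"), X, y,
                          cv=StratifiedKFold(n_splits=5)).mean()
    print(f"pitch level {a} vs {b}: mean cross-validated accuracy = {acc:.2f}")
```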

    Accessory to dissipate heat from transcranial magnetic stimulation coils

    Background: Transcranial magnetic stimulation (TMS) produces magnetic pulses by passing a strong electrical current through coils of wire. Repeated stimulation accumulates heat, which places practical constraints on experimental design. New Method: We designed a condensation-free, pre-chilled heat sink to extend the operational duration of TMS coils. Results: The application of a pre-chilled heat sink reduced the rate of heating across all tests and extended the duration of stimulation before coil overheating, particularly in conditions where heat management was problematic. Comparison with Existing Method: Applying an external heat sink had the practical effect of extending the operational time of TMS coils by 5.8 to 19.3 minutes compared to standard operating procedures. Conclusion: Applying an external heat sink increases the quantity of data that can be collected within a single experimental session.

    Poor neuro-motor tuning of the human larynx: A comparison of sung and whistled pitch imitation

    Vocal imitation is a hallmark of human communication that underlies the capacity to learn to speak and sing. Even so, poor vocal imitation abilities are surprisingly common in the general population, and even expert vocalists cannot match the precision of a musical instrument. Although humans have evolved a greater degree of control over the laryngeal muscles that govern voice production, this ability may be underdeveloped compared with control over the articulatory muscles, such as the tongue and lips, volitional control of which emerged earlier in primate evolution. Human participants imitated simple melodies by either singing (i.e. producing pitch with the larynx) or whistling (i.e. producing pitch with the lips and tongue). Sung notes were systematically biased towards each individual’s habitual pitch, which we hypothesize may act to conserve muscular effort. Furthermore, while participants who sang more precisely also whistled more precisely, sung imitations were less precise than whistled imitations. The laryngeal muscles that control voice production are under less precise control than the oral muscles that are involved in whistling. This imprecision may be due to the relatively recent evolution of volitional laryngeal-motor control in humans, which may be tuned just well enough for the coarse modulation of vocal pitch in speech.
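
    One common way to express imitation precision in work like this is the deviation, in cents (hundredths of a semitone), between produced and target fundamental frequencies. The short sketch below shows the conversion; the target melody and produced frequencies are invented values for illustration, not data from the study.

```python
# Deviation of produced notes from target notes, in cents (1 semitone = 100
# cents). The frequencies below are made-up values for illustration only.
import numpy as np

def cents_error(produced_hz, target_hz):
    """Signed deviation of each produced note from its target, in cents."""
    return 1200.0 * np.log2(np.asarray(produced_hz) / np.asarray(target_hz))

target   = np.array([220.0, 246.9, 277.2])   # simple target melody (Hz)
sung     = np.array([215.0, 244.0, 271.0])   # hypothetical sung imitation
whistled = np.array([219.0, 246.0, 276.0])   # hypothetical whistled imitation

for label, produced in (("sung", sung), ("whistled", whistled)):
    err = cents_error(produced, target)
    print(f"{label:9s} mean absolute error: {np.mean(np.abs(err)):.1f} cents")
```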

    Speech with pauses sounds deceptive to listeners with and without hearing impairment

    Purpose: Communication is as much persuasion as it is the transfer of information. This creates a tension between the interests of the speaker and those of the listener, as dishonest speakers naturally attempt to hide deceptive speech, and listeners are faced with the challenge of sorting truths from lies. Hearing-impaired listeners in particular may have differing levels of access to the acoustical cues that give away deceptive speech. A greater tendency towards speech pauses has been hypothesised to result from the cognitive demands of lying convincingly. Higher vocal pitch has also been hypothesised to mark the increased anxiety of a dishonest speaker. Method: Listeners with or without hearing impairments heard short utterances from natural conversations, some of which had been digitally manipulated to contain either increased pausing or raised vocal pitch. Listeners were asked to guess whether each statement was a lie in a two-alternative forced-choice task. Participants were also asked explicitly which cues they believed had influenced their decisions. Results: Statements were more likely to be perceived as lies when they contained pauses, but not when vocal pitch was raised. This pattern held regardless of hearing ability. In contrast, both groups of listeners self-reported using vocal pitch cues to identify deceptive statements, though at lower rates than pauses. Conclusions: Listeners may have only partial awareness of the cues that influence their impression of dishonesty. Hearing-impaired listeners may place greater weight on acoustical cues according to the differing degrees of access provided by hearing aids.
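
    The two manipulations described in the Method (increased pausing and raised vocal pitch) can be sketched with standard audio tooling. The snippet below splices extra silence into a recording and pitch-shifts a copy of it; the file names, splice point, and shift size are assumptions for illustration, and this is not the study's actual stimulus-preparation code.

```python
# Hedged sketch of the two stimulus manipulations: lengthening a pause by
# splicing in silence, and raising vocal pitch while preserving duration.
# File names, the splice point, and the shift size are illustrative only.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("utterance.wav", sr=None, mono=True)   # hypothetical file

# 1) Increased pausing: insert 500 ms of silence at an assumed word boundary.
boundary = int(1.2 * sr)
silence = np.zeros(int(0.5 * sr), dtype=y.dtype)
y_paused = np.concatenate([y[:boundary], silence, y[boundary:]])

# 2) Raised pitch: shift up by one semitone without changing duration.
y_high = librosa.effects.pitch_shift(y, sr=sr, n_steps=1)

sf.write("utterance_paused.wav", y_paused, sr)
sf.write("utterance_highpitch.wav", y_high, sr)
```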